Ethics in Generative AI Governance
Reflective Piece (Using Driscoll’s Model)


What?

This activity required me to critically examine the ethical challenges introduced by the rapid adoption of generative AI, drawing primarily on Corrêa et al. (2023) and Deckard (2023). In my discussion post, I focused on a central tension emerging across global AI governance frameworks: whether AI should primarily enhance human capabilities or replace human labour. Using Corrêa et al.'s comparative analysis of 200 AI ethics guidelines, I highlighted the widespread agreement on high-level values such as transparency, accountability, and human-centredness, alongside a critical weakness: the lack of enforceable mechanisms. I complemented this with Deckard's professional perspective, which frames AI ethics as a lived, interdisciplinary practice rather than a theoretical checklist. My submission ultimately argued for a model of human–AI collaboration that preserves professional development, accountability, and ethical judgment within computing roles.

So what?

Reflecting on this task made me reconsider how ethical principles translate into real organisational behaviour. Prior to this activity, I tended to view AI governance largely through a technical or policy lens, focusing on compliance, bias mitigation, and data protection. Engaging with Corrêa et al.'s findings revealed a deeper issue: ethical consensus without enforcement risks becoming performative. The fact that the vast majority of global AI ethics frameworks are non-binding forced me to question how organisations might publicly align with ethical values while privately deploying AI in ways that erode professional roles and accountability.

The distinction between augmentation and replacement also sharpened my awareness of longer-term social consequences. I had previously associated automation debates mainly with efficiency and productivity, but this reflection highlighted how removing entry-level roles undermines the professional pipeline itself. Deckard's emphasis on human judgment, communication, and context reinforced that ethical oversight cannot be fully automated. This reframed my understanding of professionalism in computing: not simply technical competence, but stewardship of systems that shape careers, trust, and institutional responsibility.

Now what?

Going forward, this reflection will influence how I evaluate and advocate for AI use in professional settings. I am now more inclined to question not just whether an AI system works, but what human roles it displaces or reshapes. In practical terms, this means supporting AI implementations that explicitly keep humans in the loop, particularly in decision-making, oversight, and ethical review. I also see greater value in formal mechanisms such as impact assessments and internal ethics governance, which move ethical commitments beyond policy statements into operational practice.

From an academic perspective, this activity has strengthened my ability to critically assess emerging technologies within broader legal, social, and professional contexts. As I progress through the MSc in Data Science, I intend to integrate ethical evaluation alongside technical design, treating ethics not as an external constraint but as a core design requirement. Ultimately, this reflection reinforced that the sustainability of the computing profession depends not on how efficiently AI can replace humans, but on how responsibly it can be used to support human judgment, development, and accountability.

References

  • Corrêa, N.K. et al. (2023) ‘Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance’, Patterns, 4(10), 100857.
  • Deckard, R. (2023) ‘What are ethics in AI?’, BCS – The Chartered Institute for IT.
  • Driscoll, J. (2007) Practising Clinical Supervision: A Reflective Approach for Healthcare Professionals. 2nd ed. Edinburgh: Elsevier.